Fast Spectral Low Rank Matrix Approximation

Authors

  • Haishan Ye
  • Zhihua Zhang
Abstract

In this paper, we study the subspace embedding problem and obtain the following results:

1. We extend results on approximate matrix multiplication from the Frobenius norm to the spectral norm. Assume matrices A and B have stable rank at most r and rank at most r̃. Let S be a subspace embedding matrix whose number of rows l depends on the stable rank; then with high probability, ‖AS^T SB − AB‖2 ≤ ε‖A‖2‖B‖2.

2. We develop a class of fast approximate generalized linear regression algorithms with respect to the spectral norm. We design a new least-squares regression algorithm in which the subspace embedding matrix S has the (√(ε/r), δ)-JL moment property, where r is the stable rank of A, which never exceeds the rank of A. Let x̃ = argmin_x ‖SAx − Sb‖2; then ‖Ax̃ − b‖2 ≤ (1 + ε) min_x ‖Ax − b‖2.

3. We give a concise proof and a tighter error upper bound for the randomized SVD of Halko et al. (2011). Besides the Gaussian random projection and the Subsampled Randomized Hadamard Transform used in Halko et al. (2011), we find that a large class of matrices with the oblivious l2-subspace embedding property can be used in randomized SVD. We give a fast randomized SVD algorithm using a sparse embedding matrix, and a framework showing that composing different subspace embedding matrices preserves the same relative error bound.

4. We design a fast low-rank approximation algorithm with relative error in the spectral norm, parameterized by the stable rank. For A ∈ R^{n×d}, given k and ε, we obtain a decomposition of A into L, D, W such that ‖A − LDW‖2 ≤ (1 + ε)‖A − A_k‖2, and our algorithm runs in Õ(nnz(A)/ε + (n + d)r1/ε + r1 r2^2/ε) time. Here A_{2k} and A − A_k both have stable rank at most r1, SA and A − A(SA)†SA both have stable rank at most r2, and S is a sparse subspace embedding matrix with Õ(r1/ε) rows.
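
The two sketching ideas above (sketched least squares, and low-rank approximation through a sparse embedding) can be illustrated with a short self-contained example. This is a generic sketch using a CountSketch-style sparse subspace embedding, not the authors' implementation; all dimensions, sketch sizes, and noise levels are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def countsketch(l, n, rng):
    """A sparse subspace embedding: each column of the l x n matrix S
    holds a single random +/-1 entry in a uniformly random row."""
    S = np.zeros((l, n))
    S[rng.integers(0, l, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return S

# --- Sketched least squares: x_tilde = argmin_x ||S A x - S b||_2 ---
n, d = 2000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

S = countsketch(200, n, rng)                  # l = 200 rows (illustrative)
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
x_tilde = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
res_exact = np.linalg.norm(A @ x_exact - b)
res_tilde = np.linalg.norm(A @ x_tilde - b)   # (1+eps)-approximate w.h.p.

# --- Randomized low-rank approximation via a sparse embedding ---
m, p, k = 500, 100, 5
B = rng.standard_normal((m, k)) @ rng.standard_normal((k, p))
A2 = B + 0.01 * rng.standard_normal((m, p))   # nearly rank-k matrix
S2 = countsketch(50, m, rng)
Q = np.linalg.qr((S2 @ A2).T)[0]              # basis for the row space of S2 A2
A2_approx = A2 @ Q @ Q.T                      # project rows of A2 onto that basis
err = np.linalg.norm(A2 - A2_approx, 2)       # spectral-norm error
best = np.linalg.svd(A2, compute_uv=False)[k] # sigma_{k+1} = ||A2 - (A2)_k||_2
```

On this instance the sketched residual stays within a small constant factor of the optimal one, and the spectral-norm error of the projection stays within a small factor of the best rank-k error, consistent with the bounds stated above.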

Related articles

Multi-level Low-rank Approximation-based Spectral Clustering for image segmentation

Spectral clustering is a well-known graph-theoretic approach of finding natural groupings in a given dataset, and has been broadly used in image segmentation. Nowadays, High-Definition (HD) images are widely used in television broadcasting and movies. Segmenting these high resolution images presents a grand challenge to the current spectral clustering techniques. In this paper, we propose an ef...
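
As background, the standard spectral clustering pipeline this work builds on (similarity graph, normalized Laplacian, eigenvector-based partition) can be sketched as follows. This is a generic two-cluster illustration, not the multi-level low-rank method proposed in the paper; the data, RBF kernel, and sign-based split are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 1-D clusters of 20 points each.
x = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(5.0, 0.1, 20)])

W = np.exp(-(x[:, None] - x[None, :]) ** 2)           # RBF similarity graph
deg = W.sum(axis=1)
L = np.eye(len(x)) - W / np.sqrt(np.outer(deg, deg))  # normalized Laplacian
_, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]          # eigenvector of the 2nd-smallest eigenvalue
labels = (fiedler > 0).astype(int)
```

For two clusters, the sign pattern of this eigenvector already separates the groups; for k clusters one would instead run k-means on the leading eigenvectors, which is exactly the eigendecomposition step that low-rank approximation accelerates for large images.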

Fast Structured Direct Spectral Methods for Differential Equations with Variable Coefficients, I. The One-Dimensional Case

We study the rank structures of the matrices in Fourier- and Chebyshev-spectral methods for differential equations with variable coefficients in one dimension. We show analytically that these matrices have a so-called low-rank property, not only for constant or smooth variable coefficients, but also for coefficients with steep gradients and/or high variations (large ratios in their maximum-minimu...

Fast Low Rank Approximation of a Sylvester Matrix by Structured Total Least Norm

The problem of approximating the greatest common divisor (GCD) of polynomials with inexact coefficients can be formulated as a low rank approximation problem with a Sylvester matrix. In this paper, we present an algorithm based on fast Structured Total Least Norm (STLN) for constructing a Sylvester matrix of given lower rank and obtaining the nearest perturbed polynomials with exact GCD of given...

Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine Learning

Solving symmetric positive definite linear problems is a fundamental computational task in machine learning. The exact solution, famously, is cubically expensive in the size of the matrix. To alleviate this problem, several linear-time approximations, such as spectral and inducing-point methods, have been suggested and are now in wide use. These are low-rank approximations that choose the low-ran...
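
For context, the classical iterative baseline for symmetric positive definite systems is the conjugate gradient method, which touches the matrix only through matrix-vector products. A minimal sketch (plain CG, not the subspace-recycling variant discussed in that paper; the test matrix is an arbitrary well-conditioned SPD example):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=500):
    """Solve A x = b for symmetric positive definite A.
    Cost per iteration: one matrix-vector product A @ p."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)      # symmetric positive definite
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
residual = np.linalg.norm(A @ x - b)
```

Because each iteration needs only one product with A, the cost per step is linear in the number of nonzeros of A, which is what makes Krylov methods (and recycling their subspaces across related systems) attractive for large machine-learning problems.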

Fast Exact Matrix Completion with Finite Samples

Matrix completion is the problem of recovering a low rank matrix by observing a small fraction of its entries. A series of recent works (Keshavan, 2012; Jain et al., 2013; Hardt, 2014) have proposed fast non-convex optimization based iterative algorithms to solve this problem. However, the sample complexity in all these results is sub-optimal in its dependence on the rank, condition number and ...
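
As a toy illustration of the non-convex iterative approach to matrix completion (in the spirit of alternating minimization, not the specific algorithms cited above), consider the noiseless rank-1 case with spectral initialization; sizes, the sampling rate, and the iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
M = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 truth
mask = rng.random((n, n)) < 0.7                               # observe ~70% of entries

# Spectral initialization: top left singular vector of the zero-filled matrix.
a = np.linalg.svd(np.where(mask, M, 0.0))[0][:, 0]

# Alternating minimization: each update is an exact 1-D least-squares fit
# over the observed entries of one column (for b) or one row (for a).
for _ in range(15):
    bvec = np.array([(a[mask[:, j]] @ M[mask[:, j], j]) /
                     (a[mask[:, j]] @ a[mask[:, j]]) for j in range(n)])
    a = np.array([(bvec[mask[i]] @ M[i, mask[i]]) /
                  (bvec[mask[i]] @ bvec[mask[i]]) for i in range(n)])

rel_err = np.linalg.norm(np.outer(a, bvec) - M) / np.linalg.norm(M)
```

In this easy regime the iterates converge rapidly to the true matrix; the sample-complexity question the snippet above raises is how few observed entries suffice for such iterations to still succeed, as a function of rank and condition number.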

Journal title:

Volume   Issue 

Pages  -

Publication date: 2015